
    Adapting referring expressions to the task environment

    When people refer to objects linguistically, they must choose properties of the object that make it possible for the listener to identify the intended referent. We show that this selection of properties not only depends on the task environment but also changes over the course of time. We find that the salient feature color is used less often over time because of its limited utility in our task, while other features with high utility are used more often. We also find that speakers change their behavior not because of feedback from the interlocutor but because of experience gained when the roles in the task are reversed.

    Generating Subsequent Reference in Shared Visual Scenes: Computation vs Re-Use

    Traditional computational approaches to referring expression generation operate in a deliberate manner, choosing the attributes to be included on the basis of their ability to distinguish the intended referent from its distractors. However, work in psycholinguistics suggests that speakers align their referring expressions with those used previously in the discourse, implying less deliberate choice and more subconscious reuse. This raises the question of which is a more accurate characterisation of what people do. Using a corpus of dialogues containing 16,358 referring expressions, we explore this question via the generation of subsequent references in shared visual scenes. We use a machine learning approach to referring expression generation and demonstrate that incorporating features that correspond to the computational tradition does not match human referring behaviour as well as using features corresponding to the process of alignment. The results support the view that the traditional model of referring expression generation that is widely assumed in work on natural language generation may not in fact be correct; our analysis may also help explain the oft-observed redundancy found in human-produced referring expressions.
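    The "deliberate" attribute selection this abstract contrasts with alignment can be illustrated with a minimal sketch in the style of the classic incremental algorithm for referring expression generation (Dale & Reiter, 1995): attributes are considered in a fixed preference order and included only when they rule out at least one distractor. The toy domain, attribute names, and preference order below are invented for illustration, not taken from the paper's corpus.

    ```python
    # Sketch of deliberate attribute selection: include an attribute only
    # when it removes at least one distractor, stopping once the intended
    # referent is uniquely identified. Illustrative domain only.

    def incremental_selection(referent, distractors, preference_order):
        """Pick attributes of `referent` until all distractors are ruled out."""
        description = {}
        remaining = list(distractors)
        for attr in preference_order:
            value = referent.get(attr)
            if value is None:
                continue
            # Distractors whose value differs are ruled out by this attribute.
            ruled_out = [d for d in remaining if d.get(attr) != value]
            if ruled_out:
                description[attr] = value
                remaining = [d for d in remaining if d.get(attr) == value]
            if not remaining:
                break
        return description

    target = {"type": "chair", "color": "red", "size": "large"}
    others = [
        {"type": "chair", "color": "red", "size": "small"},
        {"type": "table", "color": "red", "large": None, "size": "large"},
    ]
    # Color rules out nothing here (everything is red), so it is skipped
    # even though it comes first in the preference order.
    print(incremental_selection(target, others, ["color", "type", "size"]))
    ```

    Note how this mirrors the first abstract's finding: an attribute like color can be highly salient yet have low utility, and a deliberate selector simply skips it.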

    Dialogue Reference in a Visual Domain

    A central purpose of referring expressions is to distinguish intended referents from other entities that are in the context; but how is this context determined? This paper draws a distinction between discourse context (other entities that have been mentioned in the dialogue) and visual context (visually available objects near the intended referent). It explores how these two different aspects of context have an impact on subsequent reference in a dialogic situation where the speakers share both discourse and visual context. In addition, we take into account the impact of the reference history (forms of reference used previously in the discourse) on forming what have been called conceptual pacts. By comparing the output of different parameter settings in our model to a data set of human-produced referring expressions, we determine that an approach to subsequent reference based on conceptual pacts provides a better explanation of our data than previously proposed algorithmic approaches which compute a new distinguishing description for the intended referent every time it is mentioned.
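    The contrast the abstract draws — re-using an established description (a conceptual pact) versus recomputing a fresh distinguishing description at every mention — can be sketched as a simple control strategy. The helper names and toy domain below are invented for illustration and are not the paper's model.

    ```python
    # Sketch of subsequent reference via conceptual pacts: prefer the
    # description used last time if it still distinguishes the referent
    # in the current context; otherwise fall back to deliberate
    # recomputation. Illustrative only.

    def distinguishes(description, distractors):
        """A description succeeds if no distractor matches all its attributes."""
        return not any(
            all(d.get(a) == v for a, v in description.items()) for d in distractors
        )

    def subsequent_reference(referent_id, distractors, history, recompute):
        prior = history.get(referent_id)
        if prior is not None and distinguishes(prior, distractors):
            return prior                    # re-use: honour the conceptual pact
        fresh = recompute(distractors)      # deliberate recomputation
        history[referent_id] = fresh
        return fresh

    history = {"r1": {"color": "red"}}
    # The prior description still rules out the lone blue distractor,
    # so it is re-used rather than recomputed.
    print(subsequent_reference("r1", [{"color": "blue"}], history,
                               lambda ds: {"type": "chair"}))
    ```

    The design choice the paper's comparison turns on is exactly the first branch: an alignment-based model keeps `prior` even when a shorter or different description would also succeed, whereas a pure recomputation model ignores the history entirely.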

    Prosodic marking of contrasts in information structure

    Successful dialogue requires cultivation of common ground (Clark, 1996), shared information, which changes as the conversation proceeds. Dialogue partners can maintain common ground by using different modalities like eye gaze, facial expressions, gesture, content information or intonation. Here, we focus on intonation and investigate how contrast in information structure is prosodically marked in spontaneous speech. Combinatory Categorial Grammar (CCG; Steedman, 2000) distinguishes theme and rheme as elements of information structure. In some cases they can be distinguished by the pitch accent with which the corresponding words are realised. We experimentally evoke instances of
